A Self-Adaptive Approximated-Gradient-Simulation Method for Black-Box Adversarial Sample Generation

Authors

Abstract

Deep neural networks (DNNs) have been widely applied to a variety of everyday tasks. However, DNNs are sensitive to adversarial attacks which, by adding imperceptible perturbations to an original image, can easily alter the output. State-of-the-art white-box attack methods successfully fool DNNs through the network gradient. In addition, they generate perturbation samples by considering only the sign information of the gradient and dropping its magnitude. Accordingly, gradients of different magnitudes may adopt the same sign to construct perturbation samples, resulting in inefficiency. Unfortunately, it is often impractical to acquire the network gradient in real-world scenarios. Consequently, we propose a self-adaptive approximated-gradient-simulation method for black-box adversarial sample generation (SAGM) to generate efficient adversarial samples. Our proposed method uses knowledge-based differential evolution to simulate the gradient and a self-adaptive momentum strategy to generate adversarial samples. To evaluate the efficiency of SAGM, a series of experiments was carried out on two datasets, namely MNIST and CIFAR-10. Compared with state-of-the-art techniques, our method can quickly and efficiently search for perturbation samples that cause misclassification. The results reveal that SAGM is an effective technique for generating adversarial samples.
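The abstract names two ingredients: a differential-evolution search that stands in for the unavailable gradient, and a momentum-style update with a self-adaptive step. The sketch below only illustrates that general idea under our own assumptions (a query-only toy_model, invented hyper-parameters such as pop_size, mutation, eps, and a crude step-halving rule standing in for the self-adaptive step size); it is not the authors' implementation.

```python
# Illustrative black-box attack in the spirit of SAGM: differential evolution
# (DE) searches for a perturbation direction that lowers the model's confidence
# in the true class (a surrogate for the unavailable gradient), and a momentum
# term accumulates that direction across iterations.  All names and
# hyper-parameters here are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def query(model, x):
    # Black-box access: the attacker only observes output probabilities.
    return model(x)

def true_class_confidence(model, x, label):
    # The attacker tries to drive this quantity down.
    return query(model, x)[label]

def de_simulate_gradient(model, x, label, pop_size=20, steps=10,
                         mutation=0.5, crossover=0.7, sigma=0.05):
    # Differential evolution over small perturbations; the fittest
    # perturbation acts as a surrogate gradient direction.
    dim = x.size
    pop = rng.normal(0.0, sigma, size=(pop_size, dim))
    fitness = np.array([true_class_confidence(model, x + p.reshape(x.shape), label)
                        for p in pop])
    for _ in range(steps):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, size=3, replace=False)]
            mutant = a + mutation * (b - c)
            cross = rng.random(dim) < crossover
            trial = np.where(cross, mutant, pop[i])
            f = true_class_confidence(model, x + trial.reshape(x.shape), label)
            if f < fitness[i]:          # greedy selection: keep whichever fools more
                pop[i], fitness[i] = trial, f
    best = pop[np.argmin(fitness)]
    return (best / (np.linalg.norm(best) + 1e-12)).reshape(x.shape)

def sagm_style_attack(model, x, label, iters=20, eps=0.3, alpha=0.05, mu=0.9):
    # Momentum accumulation of DE-simulated gradients inside an L-inf ball of
    # radius eps; alpha is halved when a step fails to reduce confidence
    # (a crude stand-in for a self-adaptive step size).
    x_adv = x.astype(float).copy()
    momentum = np.zeros_like(x_adv)
    for _ in range(iters):
        g = de_simulate_gradient(model, x_adv, label)
        momentum = mu * momentum + g
        candidate = np.clip(x_adv + alpha * np.sign(momentum), x - eps, x + eps)
        candidate = np.clip(candidate, 0.0, 1.0)
        if (true_class_confidence(model, candidate, label)
                < true_class_confidence(model, x_adv, label)):
            x_adv = candidate
        else:
            alpha *= 0.5                # self-adaptive: shrink the step on failure
        if np.argmax(query(model, x_adv)) != label:
            break                       # misclassification achieved
    return x_adv

# Toy stand-in for a black-box classifier on 8x8 inputs with 10 classes.
W = rng.normal(size=(10, 64))
def toy_model(x):
    z = W @ x.ravel()
    e = np.exp(z - z.max())
    return e / e.sum()

x0 = rng.random((8, 8))
label = int(np.argmax(toy_model(x0)))
x_adv = sagm_style_attack(toy_model, x0, label)
print("original prediction:", label,
      "adversarial prediction:", int(np.argmax(toy_model(x_adv))))
```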


Similar Articles

Adversarial-Playground: A Visualization Suite for Adversarial Sample Generation

With growing interest in adversarial machine learning, it is important for practitioners and users of machine learning to understand how their models may be attacked. We present a web-based visualization tool, ADVERSARIALPLAYGROUND, to demonstrate the efficacy of common adversarial methods against a convolutional neural network. ADVERSARIAL-PLAYGROUND provides users an efficient and effective e...


Query-Efficient Black-box Adversarial Examples

Current neural network-based image classifiers are susceptible to adversarial examples, even in the black-box setting, where the attacker is limited to query access without access to gradients. Previous methods — substitute networks and coordinate-based finite-difference methods — are either unreliable or query-inefficient, making these methods impractical for certain problems. We introduce a n...
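For context on the coordinate-based finite-difference estimators mentioned above, here is a minimal, generic sketch (not the method introduced in that paper); loss_fn, the step size h, and the two-query-per-coordinate scheme are illustrative assumptions.

```python
import numpy as np

def finite_difference_gradient(loss_fn, x, h=1e-3):
    # Coordinate-wise central-difference gradient estimate of a black-box
    # loss.  Each coordinate costs two queries, which is why such estimators
    # become query-hungry on high-dimensional images.
    flat = x.astype(float).ravel()
    grad = np.zeros(flat.size)
    for i in range(flat.size):
        e = np.zeros_like(flat)
        e[i] = h
        grad[i] = (loss_fn((flat + e).reshape(x.shape))
                   - loss_fn((flat - e).reshape(x.shape))) / (2.0 * h)
    return grad.reshape(x.shape)

# Hypothetical usage: estimate the gradient of a model's true-class probability.
# grad = finite_difference_gradient(lambda z: model(z)[true_label], image)
```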


Simple Black-Box Adversarial Perturbations for Deep Networks

Deep neural networks are powerful and popular learning models that achieve state-of-the-art pattern recognition performance on many computer vision, speech, and language processing tasks. However, these networks have also been shown susceptible to carefully crafted adversarial perturbations which force misclassification of the inputs. Adversarial examples enable adversaries to subvert the expec...


Convergence of approximated gradient method for Elman network

An approximated gradient method for training Elman networks is considered. For finite sample set, the error function is proved to be monotone in the training process, and the approximated gradient of the error function tends to zero if the weights sequence is bounded. Furthermore, after adding a moderate condition, the weights sequence itself is also proved to be convergent. A numerical example...
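Restated in symbols (notation is ours, assumed for illustration: E is the error over the finite sample set, {w^k} the weight sequence, and the tilde marks the approximated gradient):

```latex
% Claims from the abstract above, in assumed notation.
E(w^{k+1}) \le E(w^{k}) \quad \text{(monotone error during training)},
\qquad
\lim_{k \to \infty} \big\| \widetilde{\nabla} E(w^{k}) \big\| = 0
\quad \text{whenever } \{w^{k}\} \text{ is bounded.}
```

Under the additional moderate condition mentioned in the abstract, the weight sequence {w^k} itself converges.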


Open Category Classification by Adversarial Sample Generation

In real-world classification tasks, it is difficult to collect training samples from all possible categories of the environment. Therefore, when an instance of an unseen class appears in the prediction stage, a robust classifier should be able to tell that it is from an unseen class, instead of classifying it to be any known category. In this paper, adopting the idea of adversarial learning, we...



Journal

Journal title: Applied Sciences

Year: 2023

ISSN: 2076-3417

DOI: https://doi.org/10.3390/app13031298